From Prompts to Flows: A Comprehensive Overview of LangChain Orchestration
AI010 Lesson 6

From Prompts to Flows

The Evolution of LLM Interaction

In previous lessons, we focused on single-prompt interactions. Real-world applications, however, demand more than one-off question answering. To build scalable AI systems, we must turn to orchestration: chaining multiple LLM calls together, branching the logic based on user input, and letting the model interact with external data.

The Building Blocks of Orchestration

  • LLMChain: the basic unit. It combines a prompt template with a language model.
  • Sequential chains: they let you build a multi-step workflow in which the output of one step becomes the input of the next.
  • Router chains: they act as "traffic controllers," using an LLM to decide which specialized sub-chain should handle a given request (for example, sending math questions to a "math chain" and history questions to a "history chain").
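The sequential-chain idea above can be sketched in plain Python, independent of the LangChain API: each step is a function from text to text, and a sequential chain simply pipes one step's output into the next. Here `fake_llm` is a stand-in for a real model call, so the names and behavior are illustrative only.

```python
# Toy model of the sequential-chain idea (not the LangChain API): each "chain"
# is a function from text to text, and a sequential chain pipes the output of
# one step into the input of the next. fake_llm stands in for a real model.

def fake_llm(prompt: str) -> str:
    # Placeholder for an actual LLM call; just uppercases the prompt.
    return prompt.upper()

def make_chain(template: str):
    """Build a single-step 'chain': fill the template, then call the model."""
    def step(text: str) -> str:
        return fake_llm(template.format(input=text))
    return step

def sequential_chain(*steps):
    """Run the steps in order, feeding each output into the next input."""
    def run(text: str) -> str:
        for step in steps:
            text = step(text)
        return text
    return run

summarize = make_chain("Summarize: {input}")
translate = make_chain("Translate: {input}")
pipeline = sequential_chain(summarize, translate)

print(pipeline("hello"))  # -> "TRANSLATE: SUMMARIZE: HELLO"
```

The composition is the whole point: each step stays small and testable, and the pipeline is just function composition over text.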

Core Principle: The Rules of Chaining

Chains combine multiple components (models, prompts, and memory) into a single, coherent application. This modular design ensures that complex tasks can be broken down into manageable, debuggable steps.

Tip: Debugging Flows
When your flow grows complex, set langchain.debug = True. This "x-ray vision" lets you see, at every stage of the flow, the exact prompt that was sent and the raw output that came back.
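In the legacy langchain package, the flag is a module-level attribute, so enabling it is a one-line configuration change placed before any chain runs:

```python
import langchain

langchain.debug = True   # global flag: log the prompt and raw output at each step
# ... build and run your chains as usual ...
langchain.debug = False  # turn verbose logging back off when finished
```

In newer releases the same switch is exposed as `set_debug` in `langchain.globals`, but the effect is identical: every chain step prints its formatted prompt and the raw model response.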
Question 1
In LangChain, what is the primary difference between a SimpleSequentialChain and a standard SequentialChain?

  • SimpleSequentialChain supports multiple input variables, while SequentialChain does not.
  • SimpleSequentialChain only supports a single input and single output flowing between steps.
  • Only SequentialChain can be used with ChatOpenAI models.
Challenge: Library Support Router
Design a routing mechanism for a specialized bot.
You are building a support bot for a library.

Define the logic for a RouterChain that distinguishes between "Book Recommendations" and "Operating Hours."
Step 1
Create two prompt templates: one for book suggestions and one for library schedule info.
Solution:
book_template = """You are a librarian. Recommend books based on: {input}"""
schedule_template = """You are a receptionist. Answer hours queries: {input}"""

prompt_infos = [
    {"name": "books", "description": "Good for recommending books", "prompt_template": book_template},
    {"name": "schedule", "description": "Good for answering operating hours", "prompt_template": schedule_template}
]
Step 2
Define the router_template to guide the LLM on how to classify the user's intent, and initialize the chain.
Solution:
from langchain.chains import ConversationChain, LLMChain
from langchain.chains.router import MultiPromptChain
from langchain.chains.router.llm_router import LLMRouterChain, RouterOutputParser
from langchain.chains.router.multi_prompt_prompt import MULTI_PROMPT_ROUTER_TEMPLATE
from langchain.prompts import PromptTemplate

# Assumes `llm` (e.g. a ChatOpenAI instance) is already defined. Build one
# destination chain per prompt_infos entry, plus the "name: description"
# listing the router template uses to classify requests.
destination_chains = {
    info["name"]: LLMChain(llm=llm, prompt=PromptTemplate(
        template=info["prompt_template"], input_variables=["input"]))
    for info in prompt_infos
}
destinations_str = "\n".join(f"{i['name']}: {i['description']}" for i in prompt_infos)
default_chain = ConversationChain(llm=llm, output_key="text")

router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(
    destinations=destinations_str
)
router_prompt = PromptTemplate(
    template=router_template,
    input_variables=["input"],
    output_parser=RouterOutputParser(),
)
router_chain = LLMRouterChain.from_llm(llm, router_prompt)

chain = MultiPromptChain(
    router_chain=router_chain,
    destination_chains=destination_chains,
    default_chain=default_chain,
    verbose=True
)
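The router's job, stripped of the LLM, can be illustrated in plain Python: classify the query against the destination descriptions, then dispatch to the matching handler or a default. The keyword matching below is a toy stand-in for the LLM's classification step, not the LangChain API, and the handler names are hypothetical.

```python
# Toy stand-in for the library support router: fake_router mimics the LLM's
# classification step with keyword matching; dispatch plays the role of
# MultiPromptChain, routing to a destination handler or a default.

def fake_router(query: str) -> str:
    """Classify a query as 'books', 'schedule', or 'DEFAULT'."""
    q = query.lower()
    if any(word in q for word in ("book", "read", "novel")):
        return "books"
    if any(word in q for word in ("open", "hours", "close")):
        return "schedule"
    return "DEFAULT"

def dispatch(query: str) -> str:
    """Send the query to the handler chosen by the router."""
    handlers = {
        "books": lambda q: f"[librarian] recommending for: {q}",
        "schedule": lambda q: f"[receptionist] hours info for: {q}",
    }
    handler = handlers.get(fake_router(query), lambda q: f"[default] {q}")
    return handler(query)

print(dispatch("When do you open on Sundays?"))
# -> "[receptionist] hours info for: When do you open on Sundays?"
```

In the real chain, the router prompt built from destinations_str replaces fake_router, and destination_chains plus default_chain replace the handlers dictionary.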